Multiphase Geometric Couplings for the Segmentation of Neural Processes
The ability to constrain the geometry of deformable models for image segmentation can be useful when information about the expected shape or positioning of the objects in a scene is known a priori. An example of this occurs when segmenting neural cross sections in electron microscopy. Such images often contain multiple nested boundaries separating regions of homogeneous intensities. For these applications, multiphase level sets provide a partitioning framework that allows for the segmentation of multiple deformable objects by combining several level set functions. Although there has been much effort in the study of statistical shape priors that can be used to constrain the geometry of each partition, none of these methods allow for the direct modeling of geometric arrangements of partitions. In this paper, we show how to define elastic couplings between multiple level set functions to model ribbon-like partitions. We build such couplings using dynamic force fields that can depend on the image content and relative location and shape of the level set functions. To the best of our knowledge, this is the first work that shows a direct way of geometrically constraining multiphase level sets for image segmentation. We demonstrate the robustness of our method by comparing it with previous level set segmentation methods.
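To illustrate the elastic-coupling idea, here is a minimal one-dimensional sketch of our own (not the paper's code or API): two level set functions evolve under a spring-like force that pulls their zero crossings toward a fixed separation, mimicking a ribbon-like partition of constant width.

```python
import numpy as np

def coupled_level_set_step(phi_a, phi_b, target_gap, k=0.1, dt=0.2):
    """One explicit evolution step for two 1D level set functions whose
    zero crossings are elastically coupled to a target separation
    (a ribbon-like partition). Names and the 1D setting are illustrative."""
    # Current zero-crossing positions (sub-sample accuracy omitted for brevity).
    xa = int(np.argmin(np.abs(phi_a)))
    xb = int(np.argmin(np.abs(phi_b)))
    gap = xb - xa
    # Elastic force proportional to the deviation from the target ribbon width.
    force = k * (gap - target_gap)
    # Advect each interface in opposite directions by shifting the functions:
    # for a monotone phi, subtracting a constant moves its zero to the right.
    phi_a = phi_a - dt * force
    phi_b = phi_b + dt * force
    return phi_a, phi_b
```

Iterating this step shrinks or widens the gap between the two interfaces until it settles near the target width; in the paper's full method the force additionally depends on image content and interface shape.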
Segmentation fusion for connectomics
We address the problem of automatic 3D segmentation of a stack of electron microscopy sections of brain tissue. Unlike previous efforts, where the reconstruction is usually done on a section-to-section basis, or by the agglomerative clustering of 2D segments, we leverage information from the entire volume to obtain a globally optimal 3D segmentation. To do this, we formulate the segmentation as the solution to a fusion problem. We first enumerate multiple possible 2D segmentations for each section in the stack, and a set of 3D links that may connect segments across consecutive sections. We then identify the fusion of segments and links that provide the most globally consistent segmentation of the stack. We show that this two-step approach of pre-enumeration and posterior fusion yields significant advantages and provides state-of-the-art reconstruction results. Finally, as part of this method, we also introduce a robust rotationally-invariant set of features that we use to learn and enumerate the above 2D segmentations. Our features outperform previous connectomic-specific descriptors without relying on a large set of heuristics or manually designed filter banks.
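A toy version of the pre-enumeration-then-fusion idea can be sketched as a dynamic program: pick one candidate 2D segmentation per section so that per-section scores plus 3D link scores between consecutive choices are maximized. This is our own simplified stand-in; the paper's fusion solves a richer combinatorial problem over individual segments and links.

```python
import numpy as np

def fuse_sections(unary, pairwise):
    """Viterbi-style fusion over a stack of sections.
    unary: list of 1D arrays, unary[s][h] = score of hypothesis h in section s.
    pairwise: list of 2D arrays, pairwise[s][h, k] = score of the 3D link
              between hypothesis h in section s and k in section s+1.
    Returns the globally best chain of hypotheses and its total score."""
    n = len(unary)
    best = unary[0].astype(float)
    back = []
    for s in range(1, n):
        # total[h, k] = best score ending at h in s-1, extended by link (h, k).
        total = best[:, None] + pairwise[s - 1] + unary[s][None, :]
        back.append(np.argmax(total, axis=0))
        best = np.max(total, axis=0)
    # Trace back the optimal chain of hypotheses.
    path = [int(np.argmax(best))]
    for ptr in reversed(back):
        path.append(int(ptr[path[-1]]))
    return path[::-1], float(np.max(best))
```

Because every section's scores and every link's score enter the same objective, the result is globally consistent across the whole stack rather than greedily section-to-section, which is the key contrast the abstract draws.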
Large-Scale Automatic Reconstruction of Neuronal Processes from Electron Microscopy Images
Automated sample preparation and electron microscopy enable the acquisition of very large image data sets. These technical advances are of special importance to the field of neuroanatomy, as 3D reconstructions of neuronal processes at the nm scale can provide new insight into the fine-grained structure of the brain. Segmentation of large-scale electron microscopy data is the main bottleneck in the analysis of these data sets. In this paper we present a pipeline that provides state-of-the-art reconstruction performance while scaling to data sets in the GB-TB range. First, we train a random forest classifier on interactive sparse user annotations. The classifier output is combined with an anisotropic smoothing prior in a Conditional Random Field framework to generate multiple segmentation hypotheses per image. These segmentations are then combined into geometrically consistent 3D objects by segmentation fusion. We provide qualitative and quantitative evaluation of the automatic segmentation and demonstrate large-scale 3D reconstructions of neuronal processes from a cubic volume of brain tissue corresponding to 1000 consecutive image sections. We also introduce Mojo, a proofreading tool including semi-automated correction of merge errors based on sparse user scribbles.
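The anisotropic smoothing prior reflects the geometry of EM stacks: in-plane resolution is much finer than the section spacing, so the probability map should be smoothed strongly within each section and weakly (or not at all) across sections. A minimal numpy sketch of that idea, with names and parameters of our own choosing:

```python
import numpy as np

def anisotropic_smooth(prob, in_plane_rad=2, axial_rad=0):
    """Anisotropically smooth a 3D membrane-probability volume (z, y, x):
    a wide moving average within each section, little or none across
    sections. A toy stand-in for the CRF smoothing prior in the paper."""
    def box1d(a, axis, r):
        if r == 0:
            return a
        # Moving average of width 2r+1 (wrap-around at borders; fine for a sketch).
        return np.mean([np.roll(a, s, axis=axis) for s in range(-r, r + 1)], axis=0)
    out = box1d(prob, axis=0, r=axial_rad)      # weak/no smoothing across sections
    out = box1d(out, axis=1, r=in_plane_rad)    # strong smoothing within a section
    out = box1d(out, axis=2, r=in_plane_rad)
    return out
```

With `axial_rad=0`, each section is smoothed independently and no probability mass leaks between sections, matching the highly anisotropic voxel geometry of serial-section EM.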
BBN VISER TRECVID 2012 Multimedia Event Detection and Multimedia Event Recounting Systems
We describe the Raytheon BBN Technologies (BBN) led VISER system for the TRECVID 2012 Multimedia Event Detection (MED) and Recounting (MER) tasks. We present a comprehensive analysis of the different modules in our evaluation system that includes: (1) a large suite of visual, audio and multimodal low-level features, (2) modules to detect semantic scene/action/object concepts over the entire video and within short temporal spans, (3) automatic speech recognition (ASR), and (4) videotext detection and recognition (OCR). For the low-level features we used multiple static, motion, color, and audio features previously considered in the literature as well as a set of novel, fast kernel based feature descriptors developed recently by BBN. For the semantic concept detection systems, we leveraged BBN's natural language processing (NLP) technologies to automatically analyze and identify salient concepts from short textual descriptions of videos and frames. Then, we trained detectors for these concepts using visual and audio features. The semantic concept based systems enable rich description of video content for event recounting (MER). The video level concepts have the most coverage and can provide robust concept detections on most videos. Segment level concepts are less robust, but can provide sequence information that enriches recounting. Object detection, ASR and OCR are sporadic in occurrence but have high precision and improve the quality of the recounting. For the MED task, we combined these different streams using multiple early/feature level and late/score level fusion strategies. We present a rigorous analysis of each of these subsystems and the impact of different fusion strategies. In particular, we present a thorough study of different semantic feature based systems compared to low-level feature based systems considered in most MED systems. Consistent with previous MED evaluations, low-level features exhibit strong performance. Further, semantic feature based systems have comparable performance to the low-level system, and produce gains in fusion. Overall, BBN's primary submission has an average missed detection rate of 29.6% with a false alarm rate of 2.6%. One of BBN's contrastive runs has <50% missed detection and <4% false alarm rates for all twenty events.
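One of the simplest late/score-level fusion strategies the abstract mentions can be sketched as follows: z-normalize each subsystem's detection scores so they are comparable, then take a weighted average. This is our own illustrative sketch, not BBN's implementation; in practice the weights would be tuned on a validation set.

```python
import numpy as np

def late_fusion(score_lists, weights=None):
    """Score-level ("late") fusion of several detection subsystems.
    score_lists: (n_systems, n_videos) raw scores, one row per subsystem.
    Each row is z-normalized, then rows are combined by weighted average."""
    scores = np.asarray(score_lists, dtype=float)
    mu = scores.mean(axis=1, keepdims=True)
    sd = scores.std(axis=1, keepdims=True)
    sd[sd == 0] = 1.0                      # guard against constant scorers
    z = (scores - mu) / sd
    if weights is None:
        weights = np.ones(len(scores)) / len(scores)
    return np.average(z, axis=0, weights=weights)
```

Early/feature-level fusion would instead concatenate or jointly model the features before training a single detector; score-level fusion like the above keeps each subsystem independent and only merges their outputs.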
Ssecrett and NeuroTrace: Interactive Visualization and Analysis Tools for Large-Scale Neuroscience Data Sets
Recent advances in optical and electron microscopy let scientists acquire extremely high-resolution images for neuroscience research. Data sets imaged with modern electron microscopes can range between tens of terabytes to about one petabyte. These large data sizes and the high complexity of the underlying neural structures make it very challenging to handle the data at reasonably interactive rates. To provide neuroscientists flexible, interactive tools, the authors introduce Ssecrett and NeuroTrace, two tools they designed for interactive exploration and analysis of large-scale optical- and electron-microscopy images to reconstruct complex neural circuits of the mammalian nervous system.
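Interactive exploration of volumes far larger than RAM relies on fetching only the sub-blocks the viewer currently needs. A minimal sketch of that access pattern, assuming a flat on-disk layout of our own choosing (the actual storage formats of Ssecrett and NeuroTrace are not described here):

```python
import numpy as np

def read_block(path, shape, z0, y0, x0, size, dtype=np.uint8):
    """Fetch one cubic block from a large on-disk volume via memory
    mapping, so only the touched pages are read from disk. The flat
    (z, y, x) file layout and all names here are illustrative."""
    vol = np.memmap(path, dtype=dtype, mode="r", shape=shape)
    # Copy the block out so the caller holds an in-memory array.
    return np.array(vol[z0:z0 + size, y0:y0 + size, x0:x0 + size])
```

Real viewers layer multi-resolution pyramids and caching on top of this, so that panning and zooming through terabyte-scale stacks stays interactive.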